Capture of visual attention interferes with multisensory speech processing
Abstract
Attending to a conversation in a crowded scene requires selecting relevant information while ignoring distracting sensory input, such as speech signals from surrounding people. The neural mechanisms by which distracting stimuli influence the processing of attended speech are not well understood. In this high-density electroencephalography (EEG) study, we investigated how different types of speech and non-speech stimuli influence the processing of attended audiovisual speech. Participants were presented with three horizontally aligned speakers who produced syllables. The faces of the three speakers flickered at specific frequencies (19 Hz for the flanking speakers and 25 Hz for the center speaker), inducing steady-state visual evoked potentials (SSVEPs) in the EEG that served as a measure of visual attention. The participants' task was to detect an occasional audiovisual target syllable produced by the center speaker while ignoring distracting signals from the two flanking speakers. In all experimental conditions, the center speaker produced a bimodal audiovisual syllable. In three distraction conditions, which were contrasted with a no-distraction control condition, the flanking speakers either produced audiovisual speech, moved their lips and produced acoustic noise, or moved their lips without producing an auditory signal. We observed behavioral interference in reaction times (RTs), particularly when the flanking speakers produced naturalistic audiovisual speech. These effects were paralleled by an enhanced 19 Hz SSVEP, indicative of a stimulus-driven capture of attention toward the interfering speakers. Our study provides evidence that task-irrelevant audiovisual speech signals serve as highly salient distractors that capture attention in a stimulus-driven fashion.
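The SSVEP attention measure described above amounts to quantifying EEG spectral power at the tagged flicker frequencies (19 Hz for the flanking faces, 25 Hz for the center face). Below is a minimal sketch of such a frequency-tagging analysis in Python; the sampling rate, epoch length, channel count, and the Welch-based power estimate are illustrative assumptions, not details taken from the study.

    import numpy as np
    from scipy.signal import welch

    fs = 500  # assumed EEG sampling rate in Hz; not specified in the abstract
    tag_freqs = {"flankers": 19.0, "center": 25.0}  # flicker frequencies from the study

    def ssvep_power(eeg, fs, freq, bw=0.5):
        # Mean spectral power across channels within +/- bw Hz of the tagged
        # frequency; eeg is a (n_channels, n_samples) epoch.
        f, pxx = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)  # 0.5 Hz resolution
        band = (f >= freq - bw) & (f <= freq + bw)
        return pxx[:, band].mean()

    # Hypothetical epochs standing in for a distraction and a control condition.
    rng = np.random.default_rng(0)
    eeg_distraction = rng.standard_normal((64, 10 * fs))
    eeg_control = rng.standard_normal((64, 10 * fs))
    for label, f0 in tag_freqs.items():
        print(label, ssvep_power(eeg_distraction, fs, f0),
              ssvep_power(eeg_control, fs, f0))

Under this scheme, a larger 19 Hz estimate in a distraction condition relative to control would correspond to the attention-capture effect reported in the abstract.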
Similar articles
Look who's talking: The deployment of visuo-spatial attention during multisensory speech processing under noisy environmental conditions
In a crowded scene we can effectively focus our attention on a specific speaker while largely ignoring sensory inputs from other speakers. How attended speech inputs are extracted from similar competing information has been primarily studied in the auditory domain. Here we examined the deployment of visuo-spatial attention in multiple speaker scenarios. Steady-state visual evoked potentials (SS...
Multisensory enhancement of attentional capture in visual search.
Multisensory integration increases the salience of sensory events and, therefore, possibly also their ability to capture attention in visual search. This was investigated in two experiments where spatially uninformative color change cues preceded visual search arrays with color-defined targets. Tones were presented synchronously with these cues on half of all trials. Spatial-cuing effects indic...
Multisensory conflict modulates the spread of visual attention across a multisensory object
Spatial attention to a visual stimulus that occurs synchronously with a task-irrelevant sound from a different location can lead to increased activity not only in the visual cortex, but also the auditory cortex, apparently reflecting the object-related spreading of attention across both space and modality (Busse et al., 2005). The processing of stimulus conflict, including multisensory stimulus...
Auditory-visual speech processing: Something doesn't add up (Eric Vatikiotis-Bateson)
The multimodal production and multisensory perception of speech have received much research attention in the past 60 years since Sumby and Pollack’s landmark demonstration that being able to see a talker’s face in noisy acoustic conditions dramatically improves speech intelligibility (Sumby and Pollack 1954). Myriad studies have pursued various conceptual lines about the production and processi...
An fMRI Study of Audiovisual Speech Perception Reveals Multisensory Interactions in Auditory Cortex
Research on the neural basis of speech-reading implicates a network of auditory language regions involving inferior frontal cortex, premotor cortex and sites along superior temporal cortex. In audiovisual speech studies, neural activity is consistently reported in the posterior superior temporal sulcus (pSTS), and this site has been implicated in multimodal integration. Traditionally, multisensory i...
Journal title:
Volume 6, issue
Pages -
Publication date: 2012